I think your analysis underestimates the risks from nuclear weapons. I say nuclear weapons rather than nuclear war, because they are somewhat different stories.
Some points to consider:
All of humanity could be killed by just one nuclear weapon, if it were placed inside a suitable supervolcano and caused it to erupt. I estimate that with 100 nuclear warheads one could provoke the eruptions of perhaps 20 supervolcanoes.
Some rogue country could build a stationary doomsday bomb that would create enough radioactive fallout to kill most of humanity.
Nuclear weapons production is going to become much simpler and cheaper because of laser enrichment and other advances.
A rogue superpower (may I use this oxymoron?) could attack the 400 existing nuclear reactors and nuclear waste stores with its missiles, creating fallout equivalent to a doomsday machine.
Wartime is a time of accelerated development of new weapons, and even a limited nuclear war would lead to the development of large new nanotech and biotech arsenals, just as WW2 led to the creation of nuclear weapons.
In a nuclear war there is a chance that existing stockpiles of bioweapons would be accidentally released. Even North Korea is said to have weaponized bird flu.
I wrote more about these and other scenarios in the article:
“Worst Case Scenario of Nuclear Accidents—human extinction” http://www.scribd.com/doc/52440799/Worst-Case-Scenario-of-Nuclear-Accidents-human-extinction
and in my book “Structure of global catastrophe”.
Thanks for mentioning these, and the link. I was putting some of these possibilities under the category of technological shifts (like laser enrichment). Existing bioweapons don’t seem to be extinction risks, but future super-biotech threats I would put under the category of “transformative technologies”, including super synthetic bio and AI, which the post sets aside for purposes of looking at nukes alone.
Anders Sandberg has also written about the radiation dispersal/cobalt bomb and volcano trigger approaches.
Keep in mind that in a nuclear war, even if the nuclear reactors are not particularly well targeted, many (most?) reactors are going to melt down due to having been left unattended, and spent fuel pools may catch fire too.
@Carl:
I think you dramatically underestimate both the probability and the consequences of nuclear war (by ignoring the non-small probability of a massive worsening of political relations, or of a reversal of the tentative trend toward less warfare).
It’s quite annoying to see the self-proclaimed “existential risk experts” (professional mediocrities) increasing the risks by undermining and underestimating everything that is not a fancy pet cause from modern popular culture. Please leave this to the actual scientists who occasionally give their opinions on it; they’re simply smarter than you.
I agree that the risk of war is concentrated in changes in political conditions, and that the post-Cold War trough in conflict is too small to draw inferences from. Re the tentative trend, Pinker’s assembled evidence goes back a long time, and covers many angles. It may fail to continue, and a nuclear war could change conditions thereafter, but there are many data points over time. If you want to give detail, feel free.
I would prefer to use representative expert opinion data from specialists in all the related fields (nuclear scientists, political scientists, diplomats, etc.), and the work of panels trying to assess the problem, and would defer to expert consensus in their various areas of expertise (as with climate science). But one can’t update on views that have not been made known. Martin Hellman has called for an organized effort to estimate the risk, but without success as yet. I have been raising the task of better eliciting expert opinion and improving forecasting in this area, and worked to get it on the agenda at the FHI (as I did with the FHI survey of the most cited AI academics) and at other organizations. Where I have found information about experts’ views, I have shared it.
Carl, Dymytry/private_messaging is a known troll, and not worth your time to respond to.
And re: Pinker: if you had a bit more experience with trends in necessarily very noisy data, you would realize that such trends are virtually irrelevant to the probability of encountering extremes (especially when those extremes are not even that extreme; just before the Cold War, you have Hitler). It’s the exact same mistake committed by particularly low-brow Republicans when they go on about “ha ha, global warming” during a cold spell, because they think that a trend in noisy data has a huge impact on individual data points.
edit: furthermore, Pinker’s data is on violence per capita. Total violence increased; it’s just that violence seems to scale sub-linearly with population. Population is growing, as is the number of states with nuclear weapons.
Did you not read the book? He shows big declines in rates of wars, not just per capita damage from war.
By total violence I mean the number of people dying (in wars and other violence). The rate of wars, given the huge variation in war size, is not a very useful metric.
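(An illustrative reading of the sub-linear scaling claim above; this is a sketch with an assumed power-law form, not anything stated in the comments. If total violent deaths grow with population N roughly as

D(N) \propto N^{\alpha}, \qquad 0 < \alpha < 1,

then per-capita deaths scale as D(N)/N \propto N^{\alpha - 1}, which falls as N grows even while total deaths D(N) rise. Under that assumption, declining per-capita violence and rising total deaths are consistent.)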
I frankly don’t see how, with Pinker’s trends on the one hand, and on the other the adoption of modern technologies in regions far behind on any such trends plus the development of new technologies, you conclude that Pinker’s trends outweigh the rest.
On general change: for 2100 we’re speaking of 86 years. That’s the time span in which the Russian Empire of 1900 transformed into the Soviet Union of 1986, complete with two world wars and the invention of nuclear weapons followed by thermonuclear weapons.
That’s a time span more than long enough for it to be far more likely than not that entirely unpredictable technological advances will be made in a multitude of fields that affect the ease and cost of manufacturing nuclear weapons. Enrichment is incredibly inefficient, with huge room for improvement. Go read the Wikipedia page on enrichment, then assume a much larger number of methods that could be improved. Conditional on continued progress, of course.
The political changes that happen in that sort of timespan are even less predictable.
Ultimately, what you have is that the estimates should regress towards an ignorance prior over time.
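(One illustrative way to formalize “regressing towards an ignorance prior”; this is a sketch with an assumed exponential weight, not something spelled out in the comment. Blend a model-based probability with a maximum-entropy prior, giving the model a weight that shrinks with the forecast horizon t:

\hat{P}(t) = w(t)\,P_{\mathrm{model}} + \bigl(1 - w(t)\bigr)\,P_{\mathrm{ignorance}}, \qquad w(t) = e^{-t/\tau},

where \tau is a hypothetical horizon beyond which current political and technological structure tells us little; for t much greater than \tau the estimate approaches the ignorance prior.)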
Now as for the “existential risk” rhetoric… The difference between 9.9 billion dying out of 10 billion, and 9.9 billion dying out of 9.9 billion, is primarily aesthetic in nature. It’s promoted as the supreme moral difference primarily by people with other agendas, such as “making a living from futurist speculation”.
Not if you care about future generations. If everybody dies, there are no future generations. If 100 million people survive, you can possibly rebuild civilization.
(If the 100 million eventually die out too, without finding any way to sustain the species, and it just takes longer, that’s still an existential catastrophe.)
I care about the well-being of future people, but not their mere existence. As do most people who don’t disapprove of birth control but do disapprove of, for example, drinking while pregnant.
Let’s postulate a hypothetical tiny universe where you have Adam and Eve, except they are sort of like a horse and a donkey: any children they have are certain to be sterile. Food is plentiful, etc. Is it supremely important that they have a large number of (certainly sterile) children?
Declare a conflict of interest, at least, so everyone can ignore you when you say that the “existential risk” from nuclear war is small, or when you define “existential risk” in the first place just to create a big new scary category which you can argue is dominated by AI risk.
With regard to broad trends, there is (a) big uncertainty that the trend in question even meaningfully exists (and is not a consequence of, e.g., longer recovery times after wars due to increased severity), and (b) it’s sort of like using global warming to try to estimate how cold the cold spells can get. The problem with the Cold War is that things could be a lot worse than the Cold War, and indeed were not that long ago (surely no leader during the Cold War was even remotely as bad as Hitler).
Likewise, the model uncertainty about the consequences of a total war between nuclear superpowers (which are also bioweapon superpowers, etc.) is huge. We get thrown back, and all the big predator and prey species go extinct, opening up new evolutionary niches for us primates to settle into. Do you think we just nuke each other a little and shake hands afterwards?
You convert this huge uncertainty into as low an existential risk estimate as you can possibly bend things toward without consciously thinking of yourself as acting in bad faith.
You do the exact same thing with the consequences of, say, a “hard takeoff”, in the other direction, where the model uncertainty is also very high. I don’t even believe that a hard takeoff of an expected utility maximizer (as opposed to a magical utility maximizer which has no empirically indistinguishable hypotheses but instead knows everything exactly) is that much of an existential risk to begin with. An AI’s decision-making core can never be sure it is not some sort of test run (which may not even be fully simulating the AI).
In unit tests, killing the creators is likely to get you terminated and tweaked.
The point is that there is huge model uncertainty about even a paperclip maximizer killing all humans (and far larger uncertainty about the relevance), but you aren’t pushing it downward with the same prejudice you apply to the consequences of nuclear war.
Then there’s the question: the existence of what has to be at risk for you to use the phrase “existential risk”? The whole universe? Earth-originating intelligence in general? Earth-originating biological intelligences? Human-originated intelligences? What about the continued existence of our culture and our values? Clearly the exact definition you are going to use is carefully picked so as to promote pet issues. It could have been the existence of the universe, given a pet issue of future accelerators triggering vacuum decay.
You have fully convinced me that giving money towards self-proclaimed “existential risk research” (in reality, funding the creation of disinformation and bias, easily identified by the fact that it’s not “risk” but “existential risk”) has negative utility in terms of anything I or most people on Earth actually value. Give you much more money and you’ll fund a nuclear winter denial campaign. Nuclear war is old and boring, robots are new and shiny...
edit: and to counter a known objection, that “existential risk” work may raise awareness of other types of risk as a side effect: it’s a market, and decisions about what to buy and what not to buy influence the kind of research that gets supplied.